Here are a few questions I explore in recent work and work in progress:
- Can we meaningfully ascribe any form of semantic competence to language models?
- Can language models trained only on text overcome the grounding problem?
- Can research on language models inform linguistic theory?
- Can neural networks have compositionally structured representations?
- Do language models have communicative intentions?
- Can we solve the alignment problem for language models?